Search Results

Documents authored by List, Christian


Document
Ethics and Trust: Principles, Verification and Validation (Dagstuhl Seminar 19171)

Authors: Michael Fisher, Christian List, Marija Slavkovik, and Astrid Weiss

Published in: Dagstuhl Reports, Volume 9, Issue 4 (2019)


Abstract
This report documents the programme of, and outcomes from, the Dagstuhl Seminar 19171 on "Ethics and Trust: Principles, Verification and Validation". We consider the issues of ethics and trust to be crucial to the future acceptance and use of autonomous systems. The development of new classes of autonomous systems, such as medical robots, "driver-less" cars, and assistive care robots, has opened up questions about how we can integrate truly autonomous systems into our society. Once a system is truly autonomous, i.e. learning from interactions, moving and manipulating the world we are living in, and making decisions by itself, we must be certain that it will act in a safe and ethical way, i.e. that it will be able to distinguish "right" from "wrong" and make the decisions we would expect of it. In order for society to accept these new machines, we must also trust them, i.e. we must believe that they are reliable and that they are trying to assist us, especially when engaged in close human-robot interaction. The seminar focused on questions of how trust in autonomous machines evolves, how to build a "practical" ethical and trustworthy system, and what the societal implications are. Key issues included: change of trust and trust repair, AI systems as decision makers, complex systems of norms and algorithmic bias, and potential discrepancies between the expectations and capabilities of autonomous machines. This workshop was a follow-up to the 2016 Dagstuhl Seminar 16222 on Engineering Moral Agents: From Human Morality to Artificial Morality. When organizing this workshop, we aimed to bring together the communities of researchers from moral philosophy and from artificial intelligence, and to extend them with researchers from (social) robotics and human-robot interaction research.

Cite as

Michael Fisher, Christian List, Marija Slavkovik, and Astrid Weiss. Ethics and Trust: Principles, Verification and Validation (Dagstuhl Seminar 19171). In Dagstuhl Reports, Volume 9, Issue 4, pp. 59-86, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Article{fisher_et_al:DagRep.9.4.59,
  author =	{Fisher, Michael and List, Christian and Slavkovik, Marija and Weiss, Astrid},
  title =	{{Ethics and Trust: Principles, Verification and Validation (Dagstuhl Seminar 19171)}},
  pages =	{59--86},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2019},
  volume =	{9},
  number =	{4},
  editor =	{Fisher, Michael and List, Christian and Slavkovik, Marija and Weiss, Astrid},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.9.4.59},
  URN =		{urn:nbn:de:0030-drops-113046},
  doi =		{10.4230/DagRep.9.4.59},
  annote =	{Keywords: Verification, Artificial Morality, Social Robotics, Machine Ethics, Autonomous Systems, Explainable AI, Safety, Trust, Mathematical Philosophy, Robot Ethics, Human-Robot Interaction}
}
Document
Engineering Moral Agents -- from Human Morality to Artificial Morality (Dagstuhl Seminar 16222)

Authors: Michael Fisher, Christian List, Marija Slavkovik, and Alan Winfield

Published in: Dagstuhl Reports, Volume 6, Issue 5 (2016)


Abstract
This report documents the programme of, and outcomes from, the Dagstuhl Seminar 16222 on "Engineering Moral Agents -- from Human Morality to Artificial Morality". Artificial morality is an emerging area of research within artificial intelligence (AI), concerned with the problem of designing artificial agents that behave as moral agents, i.e. adhere to moral, legal, and social norms. Context-aware, autonomous, and intelligent systems are becoming a presence in our society and are increasingly involved in making decisions that affect our lives. While humanity has developed formal legal and informal moral and social norms to govern its own social interactions, there are no similar regulatory structures that apply to non-human agents. The seminar focused on questions of how to formalise, "quantify", qualify, validate, verify, and modify the "ethics" of moral machines. Key issues included the following: How can we build regulatory structures that address (un)ethical machine behaviour? What are the wider societal, legal, and economic implications of introducing AI machines into our society? How can we develop "computational" ethics, and what are the difficult challenges that need to be addressed? When organising this workshop, we aimed to bring together the communities of researchers from moral philosophy and from artificial intelligence most concerned with this topic. This is a long-term endeavour, but the seminar was successful in laying the foundations and connections for accomplishing it.

Cite as

Michael Fisher, Christian List, Marija Slavkovik, and Alan Winfield. Engineering Moral Agents -- from Human Morality to Artificial Morality (Dagstuhl Seminar 16222). In Dagstuhl Reports, Volume 6, Issue 5, pp. 114-137, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Article{fisher_et_al:DagRep.6.5.114,
  author =	{Fisher, Michael and List, Christian and Slavkovik, Marija and Winfield, Alan},
  title =	{{Engineering Moral Agents -- from Human Morality to Artificial Morality (Dagstuhl Seminar 16222)}},
  pages =	{114--137},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2016},
  volume =	{6},
  number =	{5},
  editor =	{Fisher, Michael and List, Christian and Slavkovik, Marija and Winfield, Alan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.6.5.114},
  URN =		{urn:nbn:de:0030-drops-67236},
  doi =		{10.4230/DagRep.6.5.114},
  annote =	{Keywords: Artificial Morality, Machine Ethics, Computational Morality, Autonomous Systems, Intelligent Systems, Formal Ethics, Mathematical Philosophy, Robot Ethics}
}